
    Earth Observation – A Fundamental Input for Crisis Information Systems

    Space-borne and airborne earth observation (EO) is a highly valuable source of spatio-temporal information, enabling rapid, up-to-date assessment and (near-) real-time monitoring of natural and man-made hazards and disasters. Such information has become indispensable in present-day disaster management activities. Thereby, EO-based technologies have a role to play in each of the four phases of the disaster management cycle (i.e., mitigation, preparedness, response, and recovery), with applications grouped into three main stages:
    - Pre-disaster (preparedness and mitigation): EO-based information extraction for assessing the potential spatial distribution and severity of hazards, as well as the vulnerability of a focus region, for disaster risk evaluation and subsequent mitigation and preparedness activities.
    - Event crisis (response): Assessment and monitoring of the regional extent, characteristics, and impacts of a disaster to support rapid crisis management.
    - Post-disaster (recovery): EO-based information extraction to assist recovery activities.
    Within the PHAROS system, a wide range of data products is used, varying in temporal, spatial, and spectral resolution and coverage. The sensor platforms comprise space-borne satellites and airborne systems, i.e., aircraft as well as unmanned aerial systems (UAS).

    Semi-supervised learning with constrained virtual support vector machines for classification of remote sensing image data

    We introduce two semi-supervised models for the classification of remote sensing image data. The models are built upon the framework of Virtual Support Vector Machines (VSVM). Generally, VSVM follow a two-step learning procedure: a Support Vector Machine (SVM) model is learned to determine and extract the labeled samples that constitute the decision boundary with the maximum margin between thematic classes, i.e., the support vectors (SVs). The SVs govern the creation of so-called virtual samples, which are obtained by modifying, i.e., perturbing, the image features to which the decision boundary needs to be invariant. Subsequently, the classification model is learned a second time by using the newly created virtual samples in addition to the SVs to eventually find a new optimal decision boundary. Here, we extend this concept by (i) integrating a constrained set of semi-labeled samples when establishing the final model. Thereby, the model constrainment, i.e., the selection mechanism for including solely informative semi-labeled samples, is built upon a self-learning procedure composed of two active learning heuristics. Additionally, (ii) we consecutively deploy semi-labeled samples for the creation of semi-labeled virtual samples by modifying the image features of semi-labeled samples that have become semi-labeled SVs after an initial model run. We present experimental results from classifying two multispectral data sets with sub-meter geometric resolution. The proposed semi-supervised VSVM models exhibit the most favorable performance compared to related SVM- and VSVM-based approaches, as well as (semi-)supervised CNNs, in situations with a very limited amount of available prior knowledge, i.e., labeled samples.
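The two-step VSVM procedure described above can be sketched as follows. This is a minimal, illustrative version on toy data: the perturbation used here (additive feature jitter) is a stand-in for the invariance-preserving image-feature modifications the abstract refers to, and the semi-labeled extensions are omitted.

```python
# Minimal sketch of the two-step Virtual SVM (VSVM) procedure.
# The jitter perturbation is a placeholder for an invariance transform.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy labeled data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Step 1: learn an initial SVM and extract the support vectors (SVs).
svm = SVC(kernel="linear").fit(X, y)
sv_X = svm.support_vectors_
sv_y = y[svm.support_]

# Step 2: create virtual samples by perturbing the SVs, then retrain
# on SVs + virtual samples to find a new optimal decision boundary.
virtual_X = sv_X + rng.normal(0, 0.1, sv_X.shape)
X2 = np.vstack([sv_X, virtual_X])
y2 = np.concatenate([sv_y, sv_y])
vsvm = SVC(kernel="linear").fit(X2, y2)

print(vsvm.score(X, y))
```

Note that the final model is trained only on SVs and their virtual copies, not on the full training set, which is what makes the perturbation step decisive for the new boundary.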

    Deep multitask learning with label interdependency distillation for multicriteria street-level image classification

    Multitask learning (MTL) aims at the beneficial joint solving of multiple prediction problems by sharing information across different tasks. However, without adequate consideration of interdependencies, MTL models are prone to miss valuable information. In this paper, we introduce a novel deep MTL architecture that specifically encodes cross-task interdependencies within the setting of multiple image classification problems. Based on task-wise interim class label probability predictions by an intermediately supervised hard parameter sharing convolutional neural network, interdependencies are inferred in two ways: (i) by directly stacking label probability sequences onto the image feature vector (i.e., multitask stacking), and (ii) by passing probability sequences to gated recurrent unit-based recurrent neural networks to explicitly learn cross-task interdependency representations and stacking those onto the image feature vector (i.e., interdependency representation learning). The proposed MTL architecture is applied as a tool for generic multi-criteria building characterization using street-level imagery related to risk assessments toward multiple natural hazards. Experimental results for classifying buildings according to five vulnerability-related target variables (i.e., five learning tasks), namely height, lateral load-resisting system material, seismic building structural type, roof shape, and block position, are obtained for the Chilean capital Santiago de Chile. Our MTL methods with cross-task label interdependency modeling consistently outperform single-task learning (STL) and classical hard parameter sharing MTL alike. Even when starting from already high classification accuracy levels, estimated generalization capabilities can be further improved by considerable margins of accumulated task-specific residuals beyond +6% κ. Thereby, the combination of multitask stacking and interdependency representation learning attains the highest accuracy estimates for the addressed task and data setting (up to cross-task accuracy mean values of 88.43% overall accuracy and 84.49% κ). From an efficiency perspective, the proposed MTL methods prove substantially more favorable than STL in terms of training time consumption.
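The "multitask stacking" operation can be illustrated with plain arrays: interim per-task class-probability predictions are concatenated onto the shared image feature vector before the final task heads. All shapes and the five class counts below are invented for illustration; the paper derives the features from a CNN on street-level images.

```python
# Sketch of multitask stacking: append each task's interim label
# probability sequence to the shared image feature vector.
import numpy as np

n_images, feat_dim = 4, 128
# Five hypothetical tasks (height, material, structural type,
# roof shape, block position) with invented class counts.
n_classes_per_task = [3, 4, 5, 2, 3]

features = np.random.rand(n_images, feat_dim)

# Interim per-task class probabilities (rows sum to 1).
interim_probs = [np.random.dirichlet(np.ones(c), size=n_images)
                 for c in n_classes_per_task]

# Multitask stacking: concatenate probabilities onto the features,
# so every task head sees the other tasks' interim predictions.
stacked = np.hstack([features] + interim_probs)
print(stacked.shape)
```

The second variant in the abstract (interdependency representation learning) would instead feed `interim_probs` through a recurrent network and stack that learned representation rather than the raw probabilities.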

    Automatic Training Set Compilation with Multisource Geodata for DTM Generation from the TanDEM-X DSM

    The TanDEM-X mission (TDM) is a spaceborne radar interferometer which delivers a global digital surface model (DSM) with a spatial resolution of 0.4 arcsec. In this letter, we propose an automatic workflow for digital terrain model (DTM) generation from TDM DSM data through additional consideration of Sentinel-2 imagery and open-source geospatial vector data. The method includes the automatic and robust compilation of training samples by imposing dedicated criteria on the multisource geodata for subsequent learning of a classification model. The model supports the accurate distinction of elevated-object (OBJ) and bare-earth (BE) measurements in the TDM DSM. Finally, a DTM is interpolated from the identified BE measurements. Experimental results obtained from a test site covering a complex and heterogeneous built environment of Santiago de Chile, Chile, underline the usefulness of the proposed workflow, since it allows for substantially increased accuracies compared to a morphological filter-based method.
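The training-sample compilation step can be sketched as rule-based labeling on co-registered grids. The criteria, thresholds, and auxiliary layers below (an NDVI grid and a building-footprint mask) are illustrative stand-ins, not the paper's actual rules: the point is that confident BE/OBJ samples are selected automatically and ambiguous cells stay unlabeled.

```python
# Illustrative sketch of automatic training-sample compilation:
# dedicated criteria on multisource geodata select confident
# bare-earth (BE) and elevated-object (OBJ) samples from a DSM grid.
# All layers and thresholds here are invented toy stand-ins.
import numpy as np

rng = np.random.default_rng(1)
dsm = rng.uniform(400, 450, (100, 100))     # toy DSM heights [m]
ndvi = rng.uniform(-0.2, 0.9, (100, 100))   # toy Sentinel-2 NDVI
buildings = rng.random((100, 100)) < 0.1    # toy footprint mask

UNLABELED, BE, OBJ = -1, 0, 1
labels = np.full(dsm.shape, UNLABELED)

# OBJ: inside a building footprint, or dense high vegetation.
labels[buildings | (ndvi > 0.7)] = OBJ
# BE: outside footprints with sparse vegetation (likely open ground).
labels[~buildings & (ndvi < 0.2)] = BE

train_idx = np.argwhere(labels != UNLABELED)
print(len(train_idx), "training samples compiled")
```

A classifier trained on these samples would then predict BE/OBJ for the unlabeled cells, and the DTM would be interpolated from the BE measurements only.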

    Selection of Unlabeled Source Domains for Domain Adaptation in Remote Sensing

    In the context of supervised learning techniques, it can be desirable to utilize existing prior knowledge from a source domain to estimate a target variable in a target domain by exploiting the concept of domain adaptation. This alleviates the costly compilation of prior knowledge, i.e., training data. Here, our goal is to select a single source domain for domain adaptation from multiple potentially helpful but unlabeled source domains. Training data are obtained for a source domain only if a selection mechanism identifies it as relevant for estimating the target variable in the corresponding target domain. From a methodological point of view, we propose unsupervised source selection by voting from (an ensemble of) similarity metrics that follow aligned marginal distributions regarding image features of source and target domains. Thereby, we also propose an unsupervised pruning heuristic to include solely robust similarity metrics in an ensemble voting scheme. We evaluate the methods by learning models from training data sets created with Level-of-Detail-1 building models and regressing built-up density and height on Sentinel-2 satellite imagery. To evaluate the domain adaptation capability, we learn and apply models interchangeably for the four largest cities in Germany. Experimental results underline the capability of the methods to obtain higher accuracy levels more frequently, with an improvement of up to almost 10 percentage points for the most robust selection mechanisms compared to random source-target domain selections.
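The ensemble-voting idea can be sketched in a few lines: each similarity metric compares the target domain's feature distribution against every candidate source domain and votes for the closest one. The metrics, domains, and 1-D feature distributions below are toy choices, and the abstract's marginal-distribution alignment and metric-pruning steps are omitted.

```python
# Sketch of unsupervised source-domain selection by ensemble voting
# over similarity metrics. Domains and metrics are toy stand-ins.
import numpy as np
from collections import Counter
from scipy.stats import wasserstein_distance, ks_2samp

rng = np.random.default_rng(2)
target = rng.normal(0.0, 1.0, 500)
sources = {
    "city_A": rng.normal(0.1, 1.0, 500),   # distribution close to target
    "city_B": rng.normal(3.0, 1.0, 500),
    "city_C": rng.normal(-2.0, 2.0, 500),
}

metrics = {
    "wasserstein": lambda a, b: wasserstein_distance(a, b),
    "ks": lambda a, b: ks_2samp(a, b).statistic,
    "mean_gap": lambda a, b: abs(a.mean() - b.mean()),
}

# Each metric votes for the source domain most similar to the target.
votes = Counter()
for name, dist in metrics.items():
    best = min(sources, key=lambda s: dist(target, sources[s]))
    votes[best] += 1

selected, _ = votes.most_common(1)[0]
print("selected source domain:", selected)
```

Only the winning domain would then be labeled, matching the abstract's goal of avoiding training-data compilation for irrelevant sources.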

    Multi-target regressor chains with repetitive permutation scheme for characterization of built environments with remote sensing

    Multi-task learning techniques allow the beneficial joint estimation of multiple target variables. Here, we propose a novel multi-task regression (MTR) method called ensemble of regressor chains with repetitive permutation scheme. It belongs to the family of problem-transformation-based MTR methods, which foresee the creation of an individual model per target variable; combining the separate models then yields an overall prediction. Our method builds upon the concept of so-called ensembles of regressor chains, which align single-target models along a flexible permutation, i.e., chain. However, in order to particularly address situations with a small number of target variables, we equip the ensemble of regressor chains with a repetitive permutation scheme. Thereby, estimates of the target variables are cascaded to subsequent models as additional features when learning along a chain, whereby one target variable can occupy multiple elements of the chain. We evaluate the method by jointly estimating built-up height and built-up density from features derived from Sentinel-2 data for the four largest cities in Germany in a comparative setup. We also consider single-target stacking, multi-target stacking, and ensembles of regressor chains without repetitive permutation. Empirical results underline the beneficial performance properties of MTR methods. Our ensemble of regressor chains with repetitive permutation scheme most frequently achieved the highest accuracies among the compared MTR methods, with mean improvements across the experiments of 14.5% over the initial single-target models.
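A single regressor chain with a repetitive permutation can be sketched directly: each link's prediction is appended to the feature matrix of the next link, and a target may occupy several chain positions. The data, the base learner, and the particular chain below are toy choices standing in for the paper's ensemble over many chains.

```python
# Sketch of one regressor chain with a repetitive permutation over two
# targets: predictions cascade as extra features, and "height" occupies
# two chain positions. Data and chain order are toy stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
height = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
density = 0.5 * height + X[:, 0] + 0.1 * rng.normal(size=200)
targets = {"height": height, "density": density}

chain = ["height", "density", "height"]   # repetitive permutation
X_aug = X.copy()
models, preds = [], {}
for t in chain:
    m = LinearRegression().fit(X_aug, targets[t])
    preds[t] = m.predict(X_aug)           # cascade estimate as feature
    X_aug = np.hstack([X_aug, preds[t][:, None]])
    models.append((t, m))

# The final estimate per target is its last prediction along the chain.
print({t: preds[t].shape for t in preds})
```

In the full method, many such chains with different permutations form an ensemble whose per-target predictions are averaged.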

    Deep Neural Network Regression for Normalized Digital Surface Model Generation with Sentinel-2 Imagery

    In recent years, normalized digital surface models (nDSMs) have been constantly gaining importance as a means to solve large-scale geographic problems. High-resolution surface models are precious, as they can provide detailed information for a specific area. However, measurements with a high resolution are time consuming and costly, and only a few approaches exist to create high-resolution nDSMs for extensive areas. This article explores approaches to extract high-resolution nDSMs from low-resolution Sentinel-2 data, allowing us to derive large-scale models. We thereby utilize the advantages of Sentinel-2 being open access, having global coverage, and providing steady updates through a high repetition rate. Several deep learning models are trained to overcome the gap in producing high-resolution surface maps from low-resolution input data. With U-Net as a base architecture, we extend the capabilities of our model by integrating tailored multiscale encoders with differently sized convolution kernels as well as conformed self-attention inside the skip connection gates. Using pixelwise regression, our U-Net base models can achieve a mean height error of approximately 2 m. Moreover, through our enhancements to the model architecture, we reduce the model error by more than 7%.
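The quantities involved can be made concrete on toy grids: an nDSM is the surface model with the terrain removed (nDSM = DSM - DTM), and the quality figure quoted above is a mean height error between predicted and reference nDSMs. The arrays and the noise level below are invented for illustration; the paper's predictions come from the U-Net models, not from added noise.

```python
# Sketch of the nDSM definition and the mean-height-error metric on
# toy grids. The "prediction" is simulated with additive noise.
import numpy as np

rng = np.random.default_rng(4)
dtm = rng.uniform(400, 410, (64, 64))       # terrain height [m]
ndsm_ref = rng.uniform(0, 30, (64, 64))     # object heights above ground [m]
dsm = dtm + ndsm_ref                        # surface = terrain + objects

ndsm = dsm - dtm                            # normalization step
ndsm_pred = ndsm_ref + rng.normal(0, 2, (64, 64))  # toy "model output"

mean_height_error = np.abs(ndsm_pred - ndsm_ref).mean()
print(f"mean height error: {mean_height_error:.2f} m")
```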

    Earth observation-based disaggregation of exposure data for earthquake loss modelling

    We use TanDEM-X and Sentinel-2 observations to disaggregate earthquake risk-related exposure data. We use the refined exposure data and model earthquake loss. Results for the city of Santiago de Chile show that earthquake risk has been underestimated before due to aggregated exposure data.


    Benefits of global earth observation missions for disaggregation of exposure data and earthquake loss modeling: evidence from Santiago de Chile

    Exposure is an essential component of risk models and describes elements that are endangered by a hazard and susceptible to damage. The associated vulnerability characterizes the likelihood of experiencing damage (which can translate into losses) at a certain level of hazard intensity. Frequently, the compilation of exposure information is the costliest component (in terms of time and labor) of risk assessment procedures. Existing models often describe exposure in an aggregated manner, e.g., by relying on statistical/census data for given administrative entities. Nowadays, earth observation techniques allow the collection of spatially continuous information for large geographic areas while enabling a high geometric and temporal resolution. Consequently, we exploit measurements from the earth observation missions TanDEM-X and Sentinel-2, which collect data on a global scale, to characterize the built environment in terms of constituting morphologic properties, namely built-up density and height. Subsequently, we use this information to constrain existing exposure data in a spatial disaggregation approach. Thereby, we establish dasymetric methods for disaggregation. The results are presented for the city of Santiago de Chile, which is prone to natural hazards such as earthquakes. We present loss estimations due to seismic ground shaking and corresponding sensitivity as a function of the resolution properties of the exposure data used in the model. The experimental results underline the benefits of deploying modern earth observation technologies for refined exposure mapping and related earthquake loss estimation with enhanced accuracy properties.
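The core of a dasymetric disaggregation can be sketched in a few lines: an exposure value known only per administrative unit is redistributed to grid cells in proportion to an EO-derived weight. Using built-up density times height as the weight is a plausible reading of the morphologic properties named above, but the exact weighting and all numbers here are illustrative.

```python
# Sketch of dasymetric disaggregation: redistribute an administrative
# unit's aggregate exposure to grid cells proportionally to an
# EO-derived weight. Weighting scheme and values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
density = rng.uniform(0, 1, (10, 10))    # built-up density per cell
height = rng.uniform(0, 30, (10, 10))    # built-up height per cell [m]

unit_exposure = 1_000_000.0              # aggregate value for the unit

weights = density * height               # EO-derived disaggregation weight
cell_exposure = unit_exposure * weights / weights.sum()

# Mass preservation: disaggregated values sum back to the aggregate.
print(round(float(cell_exposure.sum()), 2))
```

Mass preservation is the defining property of the approach: refining the spatial pattern must not change the unit's total exposure, only where within the unit it is located.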